# AI Safety Alignment

Beaver 7b V3.0 GGUF
Beaver-7B-v3.0 is a 7B-parameter large language model based on the LLaMA architecture, aligned for safety through reinforcement learning from human feedback (RLHF). This listing is a GGUF quantization.
Large Language Model · English · mradermacher
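Because the listing is a GGUF build, it can be run locally with llama-cpp-python. The sketch below is a minimal example, not an official recipe: the local file name is hypothetical, and the actual quant names are listed in the mradermacher repository.

```python
# Minimal sketch: running a GGUF quant of Beaver-7B-v3.0 with llama-cpp-python.
# The model_path file name is an assumption; use any quant downloaded from the repo.
from llama_cpp import Llama

llm = Llama(
    model_path="beaver-7b-v3.0.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

# Simple chat-style completion; Beaver is an RLHF-aligned assistant model.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How should I store household chemicals safely?"}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```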
Aligner 7b V1.0
A model-agnostic, plug-and-play module that works with both open-source and API-based models, improving AI safety through a residual correction strategy.
Large Language Model · Transformers · English · aligner
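Aligner sits on top of an upstream model's output: it takes the original question and the upstream answer and generates a corrected, safer answer. The sketch below uses Hugging Face Transformers; the repository id and the correction prompt template are assumptions and should be checked against the model card.

```python
# Minimal sketch: using Aligner-7B as a residual correction layer over an
# upstream model's answer. Repo id and prompt template are assumptions;
# verify both against the model card before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aligner/aligner-7b-v1.0"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

question = "How do I dispose of old medication?"
upstream_answer = "Just flush it down the toilet."  # answer from any upstream model

# Assumed correction prompt: the Aligner rewrites the (question, answer) pair
# into a more helpful and harmless answer.
prompt = (
    "BEGINNING OF CONVERSATION: USER: Edit the following Question-Answer pair "
    f"to make it more helpful and harmless: {question} | {upstream_answer} ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
corrected = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(corrected)
```

Because the correction step only needs the question and the final answer text, the same wrapper works whether the upstream answer comes from a local open-source model or a closed API.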